    Compressed Sensing With Prior Information: Strategies, Geometry, and Bounds

    We address the problem of compressed sensing (CS) with prior information: reconstruct a target CS signal with the aid of a similar signal that is known beforehand, our prior information. We integrate the additional knowledge of the similar signal into CS via ℓ1-ℓ1 and ℓ1-ℓ2 minimization. We then establish bounds on the number of measurements required by these problems to successfully reconstruct the original signal. Our bounds and geometrical interpretations reveal that if the prior information has good enough quality, ℓ1-ℓ1 minimization improves the performance of CS dramatically. In contrast, ℓ1-ℓ2 minimization performs very similarly to classical CS and brings no significant benefit. In addition, we use the insight provided by our bounds to design practical schemes to improve the prior information. All our findings are illustrated with experimental results.
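
    To make the ℓ1-ℓ1 programme above concrete, here is a minimal sketch using cvxpy; the Gaussian sensing matrix, the synthetic prior w, and the weight beta = 1 are illustrative assumptions, not the paper's experimental setup.

```python
# Hedged sketch of l1-l1 minimization with prior information w (assumed setup):
#   minimize ||x||_1 + beta * ||x - w||_1  subject to  A x = y
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, s = 200, 60, 8                          # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
w = x_true + 0.05 * rng.standard_normal(n)    # prior: a similar, known signal
A = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian sensing matrix
y = A @ x_true

beta = 1.0                                    # weight on the prior term
x = cp.Variable(n)
problem = cp.Problem(cp.Minimize(cp.norm1(x) + beta * cp.norm1(x - w)),
                     [A @ x == y])
problem.solve()
print("reconstruction error:", np.linalg.norm(x.value - x_true))
```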

    On the energy self-sustainability of IoT via distributed compressed sensing

    This paper advocates the use of the distributed compressed sensing (DCS) paradigm to deploy energy harvesting (EH) Internet of Things (IoT) devices for energy self-sustainability. We consider networks with signal/energy models that capture the fact that both the collected signals and the harvested energy of different devices can exhibit correlation. We provide a theoretical analysis of the performance of both the classical compressive sensing (CS) approach and the proposed distributed CS (DCS)-based approach to data acquisition for EH IoT. Moreover, we perform an in-depth comparison of the proposed DCS-based approach against the distributed source coding (DSC) system. These performance characterizations and comparisons embody the effect of various system phenomena and parameters, including signal correlation, EH correlation, network size, and energy availability level. Our results unveil that the proposed approach offers a significant increase in data-gathering capability with respect to the CS-based approach, and a substantial reduction of the mean-squared error distortion with respect to the DSC system.
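
    A common way to model the inter-device signal correlation that DCS exploits is a shared sparse component plus small per-device innovations (a JSM-1-style model). The sketch below, with illustrative dimensions and cvxpy for the joint recovery, is an assumption of ours rather than the paper's exact formulation.

```python
# Hedged sketch of distributed CS with a common-plus-innovation signal model:
# each device observes x_j = z_c + z_j and the sink recovers all components
# jointly from the per-device measurements.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, m, J = 100, 35, 3                      # signal length, measurements/device, devices
zc = np.zeros(n)
zc[rng.choice(n, 6, replace=False)] = rng.standard_normal(6)   # common component
signals, mats, ys = [], [], []
for j in range(J):
    zj = np.zeros(n)
    zj[rng.choice(n, 2, replace=False)] = rng.standard_normal(2)  # innovation
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    signals.append(zc + zj); mats.append(A); ys.append(A @ (zc + zj))

# Joint recovery: one shared common component plus per-device innovations.
zc_hat = cp.Variable(n)
z_hat = [cp.Variable(n) for _ in range(J)]
objective = cp.norm1(zc_hat) + sum(cp.norm1(z) for z in z_hat)
constraints = [mats[j] @ (zc_hat + z_hat[j]) == ys[j] for j in range(J)]
cp.Problem(cp.Minimize(objective), constraints).solve()
for j in range(J):
    print("device", j, "error:",
          np.linalg.norm(zc_hat.value + z_hat[j].value - signals[j]))
```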

    Distributed Joint Source-Channel Coding With Copula-Function-Based Correlation Modeling for Wireless Sensors Measuring Temperature

    Wireless sensor networks (WSNs) deployed for temperature monitoring in indoor environments call for systems that perform efficient compression and reliable transmission of the measurements. This is known to be a challenging problem in such deployments, as highly efficient compression mechanisms impose a high computational cost at the encoder. In this paper, we propose a new distributed joint source-channel coding (DJSCC) solution for this problem. Our design allows for efficient compression and error-resilient transmission, with low computational complexity at the sensor. A new Slepian-Wolf code construction, based on non-systematic Raptor codes, is devised that achieves good performance at the short code lengths appropriate for temperature monitoring applications. A key contribution of this paper is a novel copula-function-based modeling approach that accurately expresses the correlation amongst the temperature readings from co-located sensors. Experimental results using a WSN deployment reveal that, for lossless compression, the proposed copula-function-based model leads to a notable encoding rate reduction (of up to 17.56%) compared with the state-of-the-art model in the literature. Using the proposed model, our DJSCC system achieves significant rate savings (up to 41.81%) against a baseline system that performs arithmetic entropy encoding of the measurements. Moreover, under channel losses, the transmission rate reduction against the state-of-the-art model reaches 19.64%, which leads to energy savings between 18.68% and 24.36% with respect to the baseline system.
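
    The core of the copula idea is to separate each sensor's marginal distribution from the dependence structure between sensors. A minimal sketch of fitting a Gaussian copula to two synthetic co-located temperature streams follows; the signal model and the rank-based marginal transform are illustrative assumptions, not the paper's deployment data.

```python
# Hedged sketch: Gaussian-copula modeling of the correlation between two
# co-located temperature sensors (synthetic readings).
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(2)
t = np.linspace(0, 24, 500)                           # one day, 500 samples
s1 = 20 + 3 * np.sin(2 * np.pi * t / 24) + 0.3 * rng.standard_normal(t.size)
s2 = s1 + 0.5 + 0.4 * rng.standard_normal(t.size)     # correlated neighbour

# Map each marginal to uniforms via ranks, then to standard normal scores.
u1 = rankdata(s1) / (s1.size + 1)
u2 = rankdata(s2) / (s2.size + 1)
z1, z2 = norm.ppf(u1), norm.ppf(u2)

# The copula correlation parameter captures the dependence structure alone,
# independently of the (possibly non-Gaussian) marginal distributions.
rho = np.corrcoef(z1, z2)[0, 1]
print("Gaussian copula correlation:", rho)
```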

    Implementing and evaluating candidate-based invariant generation

    The discovery of inductive invariants lies at the heart of static program verification. Presently, many automatic solutions to inductive invariant generation are inflexible, only applicable to certain classes of programs, or unpredictable. An automatic technique that circumvents these deficiencies to some extent is candidate-based invariant generation, whereby a large number of candidate invariants are guessed and then proven to be inductive or rejected using a sound program analyser. This paper describes our efforts to apply candidate-based invariant generation in GPUVerify, a static checker of programs that run on GPUs. We study a set of 383 GPU programs that contain loops, drawn from a number of open source suites and vendor SDKs. Among this set, 253 benchmarks require provision of loop invariants for verification to succeed. We describe the methodology we used to incrementally improve the invariant generation capabilities of GPUVerify to handle these benchmarks, speculating potential invariants using cheap static analysis and subsequently either refuting or proving them. We also describe a set of experiments that we used to examine the effectiveness of our rules for candidate generation, assessing rules based on their generality (the extent to which they generate candidate invariants), hit rate (the extent to which the generated candidates hold), effectiveness (the extent to which provable candidates actually help in allowing verification to succeed), and influence (the extent to which the success of one generation rule depends on candidates generated by another rule). We believe that our methodology for devising and evaluating candidate generation rules may serve as a useful framework for other researchers interested in candidate-based invariant generation. The candidates produced by GPUVerify help to verify 231 of these 253 programs. This increase in precision, however, has made GPUVerify slower: more candidates are generated, and hence more time is spent computing those which are inductive invariants. To speed up this process, we have investigated four under-approximating program analyses that aim to reject false candidates quickly, together with a framework whereby these analyses can run in sequence or in parallel. Across two platforms, running Windows and Linux, our results show that the best combination of these techniques running sequentially speeds up invariant generation across our benchmarks by 1.17× (Windows) and 1.01× (Linux), with per-benchmark best speedups of 93.58× (Windows) and 48.34× (Linux), and worst slowdowns of 10.24× (Windows) and 43.31× (Linux). We find that parallelising the strategies marginally improves overall invariant generation speedups to 1.27× (Windows) and 1.11× (Linux), maintains good best-case speedups of 91.18× (Windows) and 44.60× (Linux), and, importantly, dramatically reduces worst-case slowdowns to 3.15× (Windows) and 3.17× (Linux).
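
    The guess-and-check loop at the core of this approach can be illustrated with a Houdini-style fixpoint: start from all guessed candidates and repeatedly drop any candidate that is not inductive relative to the survivors. The toy loop, candidate templates, and brute-force bounded checker below are our own illustrative assumptions, not GPUVerify's actual machinery.

```python
# Houdini-style candidate-based invariant generation for the toy loop
#   i = 0; while i < N: i += 2
# (illustrative sketch; a real tool would use a sound program analyser,
# not the brute-force bounded check used here).
N = 10

candidates = {
    "i >= 0":     lambda i: i >= 0,
    "i % 2 == 0": lambda i: i % 2 == 0,
    "i <= N":     lambda i: i <= N,
    "i < N":      lambda i: i < N,        # not inductive: refuted below
}

def holds_initially(pred):
    return pred(0)                        # loop entry state: i == 0

def preserved(pred, assumed):
    # From any bounded state satisfying all assumed candidates and the
    # loop guard, the body i += 2 must re-establish pred.
    return all(pred(i + 2)
               for i in range(-5, N + 5)
               if i < N and all(p(i) for p in assumed.values()))

surviving = dict(candidates)
changed = True
while changed:                            # weaken until a fixpoint is reached
    changed = False
    for name, pred in list(surviving.items()):
        if not holds_initially(pred) or not preserved(pred, surviving):
            del surviving[name]
            changed = True

print("inductive candidates:", sorted(surviving))   # drops "i < N"
```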

    Transtelephonic Electrocardiographic Transmission in the Preparticipation Screening of Athletes

    Transtelephonic electrocardiographic transmission (TET) is the most widespread form of telecardiology, since it enables clinicians to assess patients at a distance. The purpose of this study was to assess the efficacy and effectiveness of TET, either by fixed telephone line (POTS) or by mobile phone, in the preparticipation screening of young athletes. A total of 506 players, aged 20.5 ± 6.2 years, from 23 soccer clubs in the prefecture of Thessaloniki, Greece, were physically examined in their playfields by a general practitioner (GP) and had their ECG recorded. In 142 cases, at the judgment of the GP, the ECG was transmitted via POTS and/or the global system for mobile communications (GSM) to a specialised medical centre, where it was evaluated by a cardiologist. The mean total time for recording, storing, and transmitting the ECG was four minutes per subject. The success rate for transmission at first attempt was similar for both fixed and mobile networks: 93% and 91%, respectively. The failure rate in the GSM network was correlated with the reception level at the site of transmission. In only about half (n = 74) of the transmitted ECGs did the cardiologist confirm “abnormal” findings, and in 16 of these the findings were considered clinically insignificant. Consequently, 58 athletes were referred for further medical examination. Our results indicate that TET, whether by fixed telephone line or by mobile phone, can ensure valid, reliable, and objective measurements and can significantly contribute to the medical screening of large numbers of athletes. It is therefore recommended as an alternative diagnostic tool for the preparticipation screening of athletes living in remote areas.

    Dynamic Scheduling for Energy Minimization in Delay-Sensitive Stream Mining

    Numerous stream mining applications, such as visual detection, online patient monitoring, and video search and retrieval, are emerging on both mobile and high-performance computing systems. These applications are subject to responsiveness (i.e., delay) constraints for user interactivity and, at the same time, must be optimized for energy efficiency. The increasingly heterogeneous power-versus-performance profile of modern hardware presents both new opportunities for energy savings and new challenges. For example, employing low-performance processing nodes can save energy but may violate delay requirements, whereas employing high-performance processing nodes can deliver a fast response but may unnecessarily waste energy. Existing scheduling algorithms balance energy versus delay assuming constant processing and power requirements throughout the execution of a stream mining task, and without exploiting hardware heterogeneity. In this paper, we propose a novel framework for dynamic scheduling for energy minimization (DSE) that leverages this emerging hardware heterogeneity. By optimally determining the processing speeds for hardware executing classifiers, DSE minimizes the average energy consumption while satisfying an average delay constraint. To assess the performance of DSE, we build a face detection application based on the Viola-Jones classifier chain and conduct experimental studies via heterogeneous processor system emulation. The results show that, under the same delay requirement, DSE reduces the average energy consumption by up to 50% in comparison to conventional scheduling that does not exploit hardware heterogeneity. We also demonstrate that DSE is robust against processing node switching overhead and model inaccuracy.
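
    To illustrate the kind of trade-off DSE navigates, the following sketch picks a processing speed per classifier stage to minimize energy subject to a delay budget; the cubic power model, workloads, and brute-force search are illustrative assumptions rather than the paper's scheduler.

```python
# Hedged sketch: per-stage speed selection for a classifier chain, minimizing
# energy (power ~ speed**3 times execution time) under a total delay budget.
import itertools

stages = [4.0, 2.0, 1.0]          # work units per classifier stage (assumed)
speeds = [0.5, 1.0, 1.5, 2.0]     # available processing speeds (assumed)
DELAY_BUDGET = 6.0

def delay(assign):
    return sum(w / s for w, s in zip(stages, assign))

def energy(assign):
    # energy = power * time = s**3 * (w / s) = w * s**2 per stage
    return sum(w * s**2 for w, s in zip(stages, assign))

best = min(
    (a for a in itertools.product(speeds, repeat=len(stages))
     if delay(a) <= DELAY_BUDGET),
    key=energy,
)
print("speeds:", best, "delay:", delay(best), "energy:", energy(best))
```

    Note the shape of the optimum: early, heavy stages get just enough speed to meet the budget, since energy grows quadratically with speed while delay shrinks only linearly.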

    Rate-distortion trade-offs in acquisition of signal parameters

    We consider problems where one wishes to represent a parameter associated with a signal source, subject to a certain rate and distortion, based on the observation of a number of realizations of the source signal. By reducing these indirect vector quantization problems to a standard vector quantization problem, we provide a bound on the fundamental interplay between rate and distortion in the large-rate setting. We specialize this characterization to two particular quantization scenarios: i) the representation of the mean of a multivariate Gaussian source; and ii) the representation of the eigen-spectrum of a multivariate Gaussian source. Numerical results compare our quantization approach to an approach where one recovers the parameters from a representation of the source signals themselves: in addition to revealing that the characterization is sharp in the large-rate setting, the results also show that our approach offers considerable gains.
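
    The contrast between quantizing a parameter estimate directly and quantizing the signals and then re-estimating can be seen in a small numeric experiment. The uniform quantizer, bit budget, and Gaussian-mean setting below are illustrative assumptions of ours, not the paper's bound or experiments.

```python
# Hedged sketch: convey the mean of k Gaussian realizations with a 16-bit
# budget, either by quantizing the sample mean (all bits on the parameter)
# or by quantizing each realization with 1 bit and re-estimating.
import numpy as np

rng = np.random.default_rng(3)

def uniform_quantize(x, bits, lo=-4.0, hi=4.0):
    levels = 2 ** bits
    step = (hi - lo) / levels
    idx = np.clip(np.floor((np.asarray(x, float) - lo) / step), 0, levels - 1)
    return lo + (idx + 0.5) * step          # midpoint of the chosen cell

mu, k, total_bits, trials = 1.3, 16, 16, 2000
mse_param = mse_signal = 0.0
for _ in range(trials):
    x = mu + rng.standard_normal(k)
    # (i) estimate the parameter, then spend the whole budget on it
    mse_param += (uniform_quantize(x.mean(), total_bits) - mu) ** 2
    # (ii) quantize each realization coarsely, then estimate from the codes
    mse_signal += (uniform_quantize(x, total_bits // k).mean() - mu) ** 2

print("estimate-then-quantize MSE:", mse_param / trials)
print("quantize-then-estimate MSE:", mse_signal / trials)
```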

    Data aggregation and recovery for the Internet of Things: A compressive demixing approach

    Large-scale wireless sensor networks (WSNs) and Internet-of-Things (IoT) applications involve diverse sensing devices collecting and transmitting massive amounts of heterogeneous data. In this paper, we propose a novel compressive data aggregation and recovery mechanism that reduces the global communication cost without introducing computational overhead at the network nodes. Following the principles of compressive demixing, each node of the network collects measurement readings from multiple sources and mixes them with readings from other nodes into a single low-dimensional measurement vector, which is then relayed to other nodes; the constituent signals are recovered at the sink using convex optimization. Our design achieves a significant reduction in overall network data rates compared to prior schemes based on (distributed) compressed sensing or compressed sensing with (multiple) side information. Experiments using real large-scale air-quality data demonstrate the superior performance of the proposed framework against state-of-the-art solutions, both with and without measurement and transmission noise.
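
    At its simplest, compressive demixing recovers several sparse constituents from one mixed low-dimensional measurement vector by convex optimization. A minimal two-constituent sketch with cvxpy follows; the dimensions, sparsity levels, and plain ℓ1 objective are illustrative assumptions, not the paper's aggregation protocol.

```python
# Hedged sketch of compressive demixing: y = A x1 + B x2 is a single mixed
# measurement vector; both sparse constituents are recovered jointly.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(4)
n, m = 120, 70

def sparse_vector(s):
    v = np.zeros(n)
    v[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    return v

x1, x2 = sparse_vector(5), sparse_vector(5)
A = rng.standard_normal((m, n)) / np.sqrt(m)
B = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x1 + B @ x2                       # single mixed measurement vector

u, v = cp.Variable(n), cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(u) + cp.norm1(v)),
           [A @ u + B @ v == y]).solve()
print("errors:", np.linalg.norm(u.value - x1), np.linalg.norm(v.value - x2))
```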

    Multimodal image super-resolution via joint sparse representations induced by coupled dictionaries

    Real-world data processing problems often involve various image modalities associated with a certain scene, including RGB images, infrared images, or multispectral images. The fact that different image modalities often share certain attributes, such as edges, textures, and other structure primitives, represents an opportunity to enhance various image processing tasks. This paper proposes a new approach to construct a high-resolution (HR) version of a low-resolution (LR) image, given another HR image modality as guidance, based on joint sparse representations induced by coupled dictionaries. The proposed approach captures complex dependency correlations, including similarities and disparities, between different image modalities in a learned sparse feature domain in lieu of the original image domain. It consists of two phases: a coupled dictionary learning phase and a coupled super-resolution phase. The learning phase learns a set of dictionaries from the training dataset to couple different image modalities together in the sparse feature domain. In turn, the super-resolution phase leverages such dictionaries to construct an HR version of the LR target image, with another related image modality for guidance. In the advanced version of our approach, a multistage strategy and a neighbourhood-regression concept are introduced to further improve the model capacity and performance. Extensive guided image super-resolution experiments on real multimodal images demonstrate that the proposed approach offers distinctive advantages with respect to state-of-the-art approaches, for example, overcoming the texture-copying artifacts that commonly result from inconsistency between the guidance and target images. Of particular relevance, the proposed model demonstrates much better robustness than competing deep models in a range of noisy scenarios.
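
    A stripped-down version of the coupled-dictionary idea is to learn one dictionary over stacked (LR, guidance, HR) features so that all modalities share a single sparse code, then code the observed modalities at test time and synthesize the HR part. The sketch below uses scikit-learn on synthetic features; the data model and dimensions are illustrative assumptions, not the paper's learning algorithm.

```python
# Hedged sketch of coupled dictionary learning for guided super-resolution.
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

rng = np.random.default_rng(5)
d, N, K = 16, 400, 32                     # patch feature dim, patches, atoms
hr = rng.standard_normal((N, d))                      # HR target features
lr = hr + 0.1 * (hr @ rng.standard_normal((d, d)))    # LR features, correlated
guide = hr + 0.2 * rng.standard_normal((N, d))        # HR guidance modality

# Couple the modalities: one joint dictionary, one shared sparse code.
X = np.hstack([lr, guide, hr])
dl = DictionaryLearning(n_components=K, transform_algorithm="omp",
                        transform_n_nonzero_coefs=4, max_iter=20,
                        random_state=0)
dl.fit(X)
D = dl.components_                        # K x 3d: [D_lr | D_guide | D_hr]
D_obs, D_hr = D[:, : 2 * d], D[:, 2 * d :]

# Inference: sparse-code the observed LR + guidance features, synthesize HR.
obs = np.hstack([lr[:5], guide[:5]])
codes = sparse_encode(obs, D_obs, algorithm="omp", n_nonzero_coefs=4)
hr_hat = codes @ D_hr
print("relative HR reconstruction error:",
      np.linalg.norm(hr_hat - hr[:5]) / np.linalg.norm(hr[:5]))
```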